- Algorithm analysis

It is sufficient to know that the running time grows proportionally to n (the big-picture approach).
If the body of the loop is executed n - 1 times <=> the time complexity is O(n)
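As a sketch of that claim (using a hypothetical loop that starts at 2, so its body runs n - 1 times):

```python
def loop_body_count(n):
    # hypothetical counter: how many times the loop body runs
    count = 0
    for i in range(2, n + 1):   # i = 2, 3, ..., n  ->  body executes n - 1 times
        count += 1
    return count

print(loop_body_count(10))   # 9, i.e., n - 1 for n = 10
```

The count grows in lockstep with n, which is exactly what O(n) captures.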

f(n) is O(g(n)) if there exists some real constant c > 0 and some integer constant n_0 >= 1 such that
f(n) <= c g(n) for all n >= n_0

Example: 8n - 2 is O(n)
f(n) = 8n - 2 and g(n) = n

8n - 2 <= c * n for n >= n_0

Choosing c = 8 and n_0 = 1 makes this true (8n - 2 <= 8n for all n >= 1).
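This choice of constants can be sanity-checked numerically (a finite sample of n, so evidence rather than a proof):

```python
def f(n):
    return 8 * n - 2   # the function being characterized

def g(n):
    return n           # the proposed bound

c, n0 = 8, 1           # the constants chosen above
# check f(n) <= c * g(n) for a sample of n >= n0
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
```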

- Using the Big Oh notation, we try to characterize the function as closely as possible

n^2 is O(n^4) (true) but it is better to say that n^2 is O(n^2)

- We also characterize the function in the simplest terms
n^2 is O(2n^2 + 3n + 2) but it is better to say that n^2 is O(n^2)

The seven functions we will deal with in this class:
Operations on the data structure:
- Constant ---> O(1)
- Logarithmic ---> O(log2(n))

Algorithms:
- Linear ---> O(n)
- Linearithmic ---> O(n log2(n))

Less practical:
- Quadratic ---> O(n^2)
- Cubic ---> O(n^3)

Not practical at all:
- Exponential ---> O(2^n)
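Tabulating the seven functions at a few sample sizes (the sizes are arbitrary, chosen just to show the gap opening up):

```python
import math

def growth_row(n):
    # values of the seven functions at input size n, in the order listed above
    return (1, math.log2(n), n, n * math.log2(n), n**2, n**3, 2**n)

for n in (2, 8, 16, 32):
    print(n, growth_row(n))
```

Already at n = 32 the exponential term (2^32) dwarfs everything else, which is why it is not practical at all.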

- Big Omega: if Big Oh corresponds to <=, then Big Omega corresponds to >= (e.g., n^2 is Big Omega of n).
Big Theta: the two functions grow at the same rate.

f(n) is Big Theta of g(n) if and only if f(n) is O(g(n)) and f(n) is also Big Omega of g(n)

- If the relationship between f(n) and g(n) is not obvious:
take the limit as n goes to infinity of the ratio f(n) / g(n); if the limit is 0 (or a constant), then f(n) is O(g(n)).
Example: log_e(n) is O(sqrt(n)), since lim_{n -> infinity} ln(n) / sqrt(n) = 0.

- 2^n vs n^1000000 => n^1000000 is O(2^n), since lim_{n -> infinity} n^1000000 / 2^n = 0.
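The first claim can be probed numerically by watching the ratio shrink (evidence for the limit being 0, not a proof):

```python
import math

def ratio(n):
    # ln(n) / sqrt(n): if this tends to 0, ln(n) is O(sqrt(n))
    return math.log(n) / math.sqrt(n)

for n in (10, 10_000, 10_000_000):
    print(n, ratio(n))   # the ratio keeps shrinking as n grows
```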

- When analyzing an iterative solution, the technique consists of: 1) identifying the bottleneck statement;
2) figuring out the number of times the bottleneck statement is executed.
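As a sketch of the technique on a hypothetical nested loop:

```python
def bottleneck_count(n):
    # step 1: the innermost statement is the bottleneck
    # step 2: it executes n * n times, so the loop is O(n^2)
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1   # bottleneck statement
    return count
```

For example, bottleneck_count(7) returns 49 = 7^2.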

- In the case of a recursive solution: 

def factorial(n):
	if n == 1:
		return 1
	return n * factorial(n - 1)


Let us denote by T(n) the running time of this recursive solution when the input is equal to n
Step#1: Come up with a recurrence relation relating T(n) to T(n-1): 
T(n) = 1 + T(n-1)

Step#2: Consider the base case: 
T(1) = 1

Step#3: 
T(n) = T(n-1) + 1 = T(n-2) + 2 = T(n-3) + 3 = ... = T(n-k) + k (after k expansions)

using the substitutions:
T(n-1) = T(n-2) + 1
T(n-2) = T(n-3) + 1

Stop when n - k = 1, i.e., k = n - 1:
T(n) = T(1) + (n - 1) = 1 + n - 1 = n, which is O(n).
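The closed form T(n) = n can be checked empirically with a call-counting variant of factorial (an illustrative instrumented sketch, not part of the original code):

```python
def factorial_counted(n):
    # returns (n!, number of calls); the call count plays the role of T(n)
    if n == 1:
        return 1, 1                      # base case: T(1) = 1
    value, calls = factorial_counted(n - 1)
    return n * value, calls + 1          # one unit of work per level: T(n) = T(n-1) + 1
```

For example, factorial_counted(5) gives (120, 5), matching T(5) = 5.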

- Recursive binary search running time analysis: 

def BSRec(arr, target, left, right):
	if left > right:
		return -1                # base case: target not in arr
	mid = (left + right) // 2    # integer division
	if arr[mid] == target:
		return mid
	elif arr[mid] < target:
		return BSRec(arr, target, mid + 1, right)
	else:
		return BSRec(arr, target, left, mid - 1)

n = size of arr
Step#1: recurrence relation

T(n) = 1 + T(n/2)

Step#2: base case
T(1) = 1

Step#3: expansion

T(n) = T(n/2) + 1 = T(n/2^2) + 2 = T(n/2^3) + 3 = ... = T(n/2^k) + k (after k expansions)

using the substitutions:
T(n/2) = T(n/4) + 1
T(n/4) = T(n/8) + 1

Stop when n / 2^k = 1, i.e., k = log2(n):
T(n) = T(n/2^k) + k = T(1) + log2(n) = 1 + log2(n), which is O(log2(n)).
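The bound T(n) = 1 + log2(n) can be checked with a call-counting variant of the search (an illustrative instrumented sketch):

```python
import math

def bs_counted(arr, target, left, right):
    # returns (index, number of calls); the call count plays the role of T(n)
    if left > right:
        return -1, 1                       # not found
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid, 1
    elif arr[mid] < target:
        idx, calls = bs_counted(arr, target, mid + 1, right)
    else:
        idx, calls = bs_counted(arr, target, left, mid - 1)
    return idx, calls + 1

arr = list(range(1024))                    # n = 1024, so log2(n) = 10
idx, calls = bs_counted(arr, 1023, 0, len(arr) - 1)
# calls stays within 1 + log2(n), plus one unit of slack for the not-found case
assert idx == 1023 and calls <= 1 + math.log2(len(arr)) + 1
```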

- T(n) = T(n/2) + n
T(1) = 1

Big Oh characterization of T(n)?
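A numeric sketch to help form a conjecture before doing the expansion: evaluate the recurrence directly for powers of two and compare against candidate growth rates.

```python
def T(n):
    # T(n) = T(n/2) + n with T(1) = 1, evaluated for n a power of two
    if n == 1:
        return 1
    return T(n // 2) + n

for n in (2, 16, 1024):
    print(n, T(n), 2 * n - 1)   # T(n) tracks 2n - 1, suggesting a linear characterization
```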












































